Learning-based Localizability Estimation for Robust LiDAR Localization
LiDAR-based localization and mapping is one of the core components in many
modern robotic systems due to the direct integration of range and geometry,
allowing for precise motion estimation and generation of high quality maps in
real-time. Yet this dependence on geometry can lead to localization failure
when the scene provides insufficient environmental constraints, as in
self-symmetric surroundings such as tunnels. This work
addresses precisely this issue by proposing a neural network-based estimation
approach for detecting (non-)localizability during robot operation. Special
attention is given to the localizability of scan-to-scan registration, as it is
a crucial component in many LiDAR odometry estimation pipelines. In contrast to
previous, mostly hand-crafted detection approaches, the proposed method enables
early failure detection by estimating localizability directly from raw sensor
measurements, without evaluating the underlying registration optimization.
Moreover, previous approaches remain limited in their ability to generalize
across environments and sensor types, as heuristic tuning of degeneracy
detection thresholds is required. The proposed approach avoids this problem by
learning from a collection of different environments, allowing the network to
function over various scenarios. Furthermore, the network is trained
exclusively on simulated data, avoiding arduous data collection in challenging
and degenerate, often hard-to-access, environments. The presented method is
tested during field experiments conducted across challenging environments and
on two different sensor types without any modifications. The observed detection
performance is on par with that of state-of-the-art methods that require
environment-specific threshold tuning. Comment: 8 pages, 7 figures, 4 tables
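As a point of comparison, the heuristic, threshold-based degeneracy test that such a learned detector replaces can be sketched in a few lines. The eigenvalue analysis of surface normals and the 0.1 threshold below are illustrative assumptions, not the paper's method:

```python
import numpy as np

def degeneracy_scores(normals):
    """Heuristic localizability check: eigenvalues of the surface-normal
    covariance indicate how strongly each principal direction is
    constrained by the scene geometry."""
    H = normals.T @ normals              # 3x3 constraint matrix
    eigvals = np.linalg.eigvalsh(H)      # ascending order
    return eigvals / eigvals.max()       # relative constraint strength

# Tunnel-like scene: normals almost never point along the travel axis (x)
rng = np.random.default_rng(0)
normals = rng.normal(size=(500, 3))
normals[:, 0] *= 0.05
normals /= np.linalg.norm(normals, axis=1, keepdims=True)

scores = degeneracy_scores(normals)
degenerate = scores < 0.1                # threshold that needs per-scene tuning
```

The final line is exactly the kind of environment-specific threshold the learned approach avoids: a cutoff that works in a tunnel may misfire in a forest.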
Safe and Fast Tracking on a Robot Manipulator: Robust MPC and Neural Network Control
Fast feedback control and safety guarantees are essential in modern robotics.
We present an approach that achieves both by combining novel robust model
predictive control (MPC) with function approximation via (deep) neural networks
(NNs). The result is a new method for complex tasks with nonlinear,
uncertain, and constrained dynamics as are common in robotics. Specifically, we
leverage recent results in MPC research to propose a new robust setpoint
tracking MPC algorithm, which achieves reliable and safe tracking of a dynamic
setpoint while guaranteeing stability and constraint satisfaction. The
presented robust MPC scheme constitutes a one-layer approach that unifies the
often separated planning and control layers, by directly computing the control
command based on a reference and possibly obstacle positions. As a separate
contribution, we show how the computation time of the MPC can be drastically
reduced by approximating the MPC law with a NN controller. The NN is trained
and validated from offline samples of the MPC, yielding statistical guarantees,
and then used in place of the MPC at run time. Our experiments on a state-of-the-art
robot manipulator are the first to show that both the proposed robust and
approximate MPC schemes scale to real-world robotic systems. Comment: 8 pages, 4 figures
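The offline sample-train-validate procedure can be illustrated with a toy stand-in. The saturated linear law, the polynomial approximator, and the tolerances below are illustrative assumptions; the paper uses a robust MPC solver and a deep NN:

```python
import numpy as np

# Stand-in for the expensive online MPC solve (assumption: a saturated
# linear feedback law; the real controller solves a robust optimization).
def mpc_control(x):
    return np.clip(-1.8 * x, -1.0, 1.0)

rng = np.random.default_rng(1)

# 1) Sample the MPC law offline over the operating region
X_train = rng.uniform(-2.0, 2.0, size=1000)
U_train = mpc_control(X_train)

# 2) Fit a cheap approximator (polynomial least squares standing in
#    for the neural network)
coeffs = np.polyfit(X_train, U_train, deg=9)
approx = lambda x: np.polyval(coeffs, x)

# 3) Validate on fresh samples: the fraction of held-out points within
#    tolerance gives a statistical estimate of approximation quality
X_val = rng.uniform(-2.0, 2.0, size=2000)
err = np.abs(approx(X_val) - mpc_control(X_val))
fraction_ok = float(np.mean(err < 0.1))
```

At run time only step 2's cheap evaluation is needed, which is what yields the drastic reduction in computation time; the held-out validation in step 3 is what turns offline samples into a statistical statement about the approximation.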
X-ICP: Localizability-Aware LiDAR Registration for Robust Localization in Extreme Environments
Modern robotic systems are required to operate in challenging environments
that demand reliable localization under adverse conditions. LiDAR-based
localization methods, such as the Iterative Closest Point (ICP) algorithm, can
suffer in geometrically uninformative environments that are known to
deteriorate point cloud registration performance and push optimization toward
divergence along weakly constrained directions. To overcome this issue, this
work proposes i) a robust fine-grained localizability detection module, and ii)
a localizability-aware constrained ICP optimization module, which couples with
the localizability detection module in a unified manner. The proposed
localizability detection utilizes the correspondences between the scan and the
map to analyze the alignment strength along the principal directions of the
optimization, yielding a fine-grained LiDAR localizability analysis. This
analysis is then integrated
into the scan-to-map point cloud registration to generate drift-free pose
updates by enforcing controlled updates or leaving the degenerate directions of
the optimization unchanged. The proposed method is thoroughly evaluated and
compared to state-of-the-art methods in simulated and real-world experiments,
demonstrating improved performance and reliability in LiDAR-challenging
environments. In all experiments, the proposed framework achieves accurate
and generalizable localizability detection and robust pose estimation without
environment-specific parameter tuning. Comment: 20 pages, 20 figures. Submitted
to IEEE Transactions on Robotics. Supplementary Video:
https://youtu.be/SviLl7q69aA Project Website:
https://sites.google.com/leggedrobotics.com/x-ic
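The general idea of constraining optimization updates along degenerate directions can be sketched as follows. This is a generic solution-remapping sketch in the spirit of the abstract, not the authors' exact X-ICP formulation; the Hessian, gradient, and threshold are toy values:

```python
import numpy as np

def constrained_update(H, g, eig_threshold=1e-2):
    """Solve the Gauss-Newton step H dx = -g, but leave weakly
    constrained directions (small Hessian eigenvalues) unchanged
    instead of letting the optimizer drift along them."""
    eigvals, eigvecs = np.linalg.eigh(H)
    dx = np.zeros_like(g)
    for lam, v in zip(eigvals, eigvecs.T):
        if lam > eig_threshold:          # well-constrained direction
            dx += (v @ -g / lam) * v     # add Newton step component
        # degenerate direction: contributes nothing (zero update)
    return dx

# Toy 2D pose problem: the x direction is nearly unconstrained
H = np.array([[1e-6, 0.0],
              [0.0,  4.0]])
g = np.array([0.5, -2.0])
dx = constrained_update(H, g)   # updates only the y component
```

Without the eigenvalue gate, the unconstrained solve would divide by the near-zero eigenvalue and produce an enormous, noise-driven step along x, which is exactly the divergence along weakly constrained directions the abstract describes.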
Open3D SLAM: Point Cloud Based Mapping and Localization for Education
Modern LiDAR SLAM systems have shown remarkable performance and the ability to operate in various environments, ranging from indoor offices to large natural environments such as forests. This versatility has been afforded through many years of research that improved SLAM system components toward reliable real-time operation. However, achieving real-time computation has come at the cost of increased complexity and specific assumptions about the point cloud representation (e.g., LOAM and its variants). This extra complexity makes it more difficult for a non-expert or a student to dive into the field, since extra effort is required to understand the ideas enabling real-time computation, even though these ideas often leave the underlying algorithmic principles unchanged. Furthermore, since SLAM performance is highly dependent on implementation quality, observed performance differences are often caused not by the underlying algorithms themselves but by how well they are implemented. Open3D SLAM tries to overcome these issues. We investigate using well-understood algorithms in their basic form as the building blocks of the proposed LiDAR-based SLAM system. Our system leverages the Open3D library, which is well maintained and performant, thus contributing to implementation quality and leaving Open3D SLAM open to future enhancements. Initial tests suggest that using basic algorithms as SLAM building blocks is viable on modern CPUs: we can build high-quality maps in environments ranging from large outdoor scenes to small offices. The generality of the proposed solution is demonstrated using different laser sensors deployed on various robotic platforms. We hope to make point cloud-based SLAM more accessible, facilitating teaching and enabling a new generation of mapping researchers to enter the field more easily. Open3D SLAM will be used in the 3rd edition of the ETH Robotic Summer School in July 2022, in Wangen a.d. Aare.
The code is available on GitHub: https://github.com/leggedrobotics/open3d_sla
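To see what "well-understood algorithms in their basic form" means in practice, here is a from-scratch sketch of the core scan registration step, point-to-point ICP in 2D. This is illustrative only, not the Open3D API or the project's implementation:

```python
import numpy as np

def icp_2d(source, target, iters=40):
    """Minimal point-to-point ICP: brute-force nearest-neighbour
    matching plus a closed-form (SVD/Kabsch) rigid alignment
    per iteration."""
    src = source.copy()
    R_total, t_total = np.eye(2), np.zeros(2)
    for _ in range(iters):
        # 1) correspondences: nearest target point for each source point
        d = np.linalg.norm(src[:, None] - target[None, :], axis=2)
        matched = target[d.argmin(axis=1)]
        # 2) best rigid transform for these correspondences (Kabsch)
        mu_s, mu_t = src.mean(axis=0), matched.mean(axis=0)
        W = (src - mu_s).T @ (matched - mu_t)
        U, _, Vt = np.linalg.svd(W)
        S = np.diag([1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ S @ U.T
        t = mu_t - R @ mu_s
        # 3) apply and accumulate
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total

# Recover a known small rotation and translation between two "scans"
rng = np.random.default_rng(2)
scan = rng.uniform(-1.0, 1.0, size=(100, 2))
theta = np.radians(5.0)
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
moved = scan @ R_true.T + np.array([0.1, -0.05])
R_est, t_est = icp_2d(scan, moved)
residual = np.mean(np.linalg.norm(scan @ R_est.T + t_est - moved, axis=1))
```

Every step here is textbook material, which is the pedagogical point: a student can read the whole registration loop without first understanding the representation tricks that real-time systems layer on top.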
Self-Supervised Learning of LiDAR Odometry for Robotic Applications
Reliable robot pose estimation is a key building block of many robot autonomy pipelines, with LiDAR localization being an active research domain. In this work, a versatile self-supervised LiDAR odometry estimation method is presented, enabling the efficient utilization of all available LiDAR data while maintaining real-time performance. The proposed approach selectively applies geometric losses during training, being cognizant of the amount of information that can be extracted from scan points. In addition, no labeled or ground-truth data is required, making the presented approach suitable for pose estimation in applications where accurate ground truth is difficult to obtain. Furthermore, the presented network architecture is applicable to a wide range of environments and sensor modalities without requiring any network or loss-function adjustments. The proposed approach is thoroughly tested in both indoor and outdoor real-world applications through a variety of experiments using legged, tracked, and wheeled robots, demonstrating the suitability of learning-based LiDAR odometry for complex robotic applications.
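The kind of geometric self-supervision signal described above can be illustrated with a point-to-plane residual. Correspondences and surface normals are assumed given here, and the plane scene is synthetic; the actual method additionally selects and weights such losses per point:

```python
import numpy as np

def point_to_plane_loss(src_pts, tgt_pts, tgt_normals, R, t):
    """Geometric self-supervision: mean squared residual of transformed
    source points along the target surface normals -- no ground-truth
    poses are involved, only the scans themselves."""
    diff = (src_pts @ R.T + t) - tgt_pts
    residuals = np.einsum('ij,ij->i', diff, tgt_normals)
    return float(np.mean(residuals ** 2))

# Synthetic check: points on the plane z = 0, "scanned" from 0.3 m lower
rng = np.random.default_rng(3)
tgt = np.c_[rng.uniform(-1.0, 1.0, size=(50, 2)), np.zeros(50)]
normals = np.tile([0.0, 0.0, 1.0], (50, 1))
src = tgt - np.array([0.0, 0.0, 0.3])

loss_wrong = point_to_plane_loss(src, tgt, normals, np.eye(3), np.zeros(3))
loss_right = point_to_plane_loss(src, tgt, normals, np.eye(3),
                                 np.array([0.0, 0.0, 0.3]))
```

Because the loss vanishes exactly when the predicted transform aligns the scans, it can supervise a pose-regression network with nothing but consecutive LiDAR scans, which is what makes the ground-truth-free training described above possible.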
Graph-based Multi-sensor Fusion for Consistent Localization of Autonomous Construction Robots
Enabling autonomous operation of large-scale construction machines, such as excavators, can bring key benefits for human safety and open up operational opportunities in dangerous and hazardous environments. To facilitate robot autonomy, robust and accurate state estimation remains a core component for enabling these machines to operate in a diverse set of complex environments.
In this work, a method for multi-modal sensor fusion for robot state estimation and localization
is presented, enabling operation of construction robots in real-world scenarios. The proposed approach employs a graph-based prediction-update loop that combines the benefits of filtering and smoothing in order to provide consistent state estimates at a high update rate, while maintaining accurate global localization for large-scale earth-moving excavators.
Furthermore, the proposed approach enables flexible integration of asynchronous sensor measurements and provides consistent pose estimates even during phases of sensor dropout. For this purpose, a dual-graph design that switches between two distinct optimization problems is proposed, directly addressing the temporary failure and subsequent return of global position estimates.
The proposed approach is implemented on-board two Menzi Muck walking excavators and validated during real-world tests conducted in representative operational environments.
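The prediction-update loop and its behavior under dropout can be illustrated with a deliberately simplified 1D filtering stand-in. The actual system optimizes factor graphs and switches between two of them; the noise values, drift rate, and dropout window below are made-up illustrative numbers:

```python
# Simplified 1D prediction-update loop: high-rate (slightly drifting)
# odometry prediction, low-rate global position updates that drop out
# for a while and later return.
x, P = 0.0, 1.0            # state estimate and its variance
Q, Rm = 0.01, 0.25         # process / measurement noise variances
truth, history = 0.0, []
for k in range(200):
    truth += 0.1                            # robot moves 0.1 m per step
    x, P = x + 0.102, P + Q                 # predict with biased odometry
    if k % 10 == 0 and not 80 <= k < 120:   # global fix every 10 steps,
        K = P / (P + Rm)                    # with a dropout window
        x, P = x + K * (truth - x), (1 - K) * P
    history.append(abs(x - truth))
# error grows during the dropout window and shrinks once fixes return
```

Even this toy loop reproduces the qualitative behavior the abstract targets: drift accumulates while the global source is unavailable, and the estimate is pulled back once it returns. The dual-graph design addresses exactly this transition in a principled way rather than through a single filter.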